Windows Photos adds fancy editing features from other Microsoft apps

PCWorld

Microsoft is adding ways to make the Windows Photos app much more powerful, combining elements of the elegant Designer app and making Photos more of a centerpiece for visual editing. Microsoft is taking optical-character-recognition capabilities that it developed several years ago and adding them to Photos, while pulling in design elements from Microsoft Designer, too. Finally, the company is beefing up File Explorer a bit as well, giving it a more robust visual search capability. Unfortunately, it's also adding a Copilot button, which for now doesn't do much. Microsoft's Windows Photos app languished for years, but it started enjoying a renaissance about two years ago with new AI-powered editing features.


From Fragment to One Piece: A Survey on AI-Driven Graphic Design

Zou, Xingxing, Zhang, Wen, Zhao, Nanxuan

arXiv.org Artificial Intelligence

This survey provides a comprehensive overview of the advancements in Artificial Intelligence in Graphic Design (AIGD), focusing on integrating AI techniques to support design interpretation and enhance the creative process. We categorize the field into two primary directions: perception tasks, which involve understanding and analyzing design elements, and generation tasks, which focus on creating new design elements and layouts. The survey covers various subtasks, including visual element perception and generation, aesthetic and semantic understanding, layout analysis, and generation. We highlight the role of large language models and multimodal approaches in bridging the gap between localized visual features and global design intent. Despite significant progress, challenges remain in understanding human intent, ensuring interpretability, and maintaining control over multilayered compositions. This survey serves as a guide for researchers, providing information on the current state of AIGD and potential future directions (https://github.com/zhangtianer521/excellent_Intelligent_graphic_design).


Design-o-meter: Towards Evaluating and Refining Graphic Designs

Goyal, Sahil, Mahajan, Abhinav, Mishra, Swasti, Udhayanan, Prateksha, Shukla, Tripti, Joseph, K J, Srinivasan, Balaji Vasan

arXiv.org Artificial Intelligence

Graphic designs are an effective medium for visual communication. They range from greeting cards to corporate flyers and beyond. Of late, machine learning techniques have become able to generate such designs, accelerating the rate of content production, so an automated way of evaluating their quality becomes critical. Towards this end, we introduce Design-o-meter, a data-driven methodology to quantify the goodness of graphic designs. Further, our approach can suggest modifications to these designs to improve their visual appeal. To the best of our knowledge, Design-o-meter is the first approach that scores and refines designs in a unified framework despite the inherent subjectivity and ambiguity of the setting. Our exhaustive quantitative and qualitative analysis of our approach against baselines adapted for the task (including recent Multimodal LLM-based approaches) brings out the efficacy of our methodology. We hope our work will usher in more interest in this important and pragmatic problem setting.


Neural Contrast: Leveraging Generative Editing for Graphic Design Recommendations

Lupascu, Marian, Mironica, Ionut, Stupariu, Mihai-Sorin

arXiv.org Artificial Intelligence

Creating visually appealing composites requires optimizing both text and background for compatibility. Previous methods have focused on simple design strategies, such as changing text color or adding background shapes for contrast. These approaches are often destructive, altering text color or partially obstructing the background image. Another method involves placing design elements in non-salient and contrasting regions, but this isn't always effective, especially with patterned backgrounds. To address these challenges, we propose a generative approach using a diffusion model. This method ensures the altered regions beneath design assets exhibit low saliency while enhancing contrast, thereby improving the visibility of the design asset.
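The contrast objective described in this abstract can be made concrete with a standard measure: the WCAG contrast ratio between a foreground (text) color and the background region beneath it. The sketch below is illustrative background only, not the paper's method (which edits background regions with a diffusion model); it simply shows how one might score whether a design asset is legible against the pixels behind it.

```python
def srgb_to_linear(c: int) -> float:
    """Convert one 8-bit sRGB channel to linear light (WCAG formula)."""
    v = c / 255.0
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """Perceptual luminance of an sRGB color, in [0, 1]."""
    r, g, b = (srgb_to_linear(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """WCAG contrast ratio, ranging from 1:1 (identical) to 21:1 (black on white)."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black text on a white background yields the maximum ratio of 21:1;
# WCAG AA requires at least 4.5:1 for normal-size text.
print(contrast_ratio((0, 0, 0), (255, 255, 255)))
```

A generative approach like the one proposed here could use such a score as an acceptance check: regenerate the background region until the ratio clears a legibility threshold.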


The AI-DEC: A Card-based Design Method for User-centered AI Explanations

Lee, Christine P, Lee, Min Kyung, Mutlu, Bilge

arXiv.org Artificial Intelligence

Increasing evidence suggests that many deployed AI systems do not sufficiently support end-user interaction and information needs. Engaging end-users in the design of these systems can reveal user needs and expectations, yet effective ways of engaging end-users in the AI explanation design remain under-explored. To address this gap, we developed a design method, called AI-DEC, that defines four dimensions of AI explanations that are critical for the integration of AI systems -- communication content, modality, frequency, and direction -- and offers design examples for end-users to design AI explanations that meet their needs. We evaluated this method through co-design sessions with workers in healthcare, finance, and management industries who regularly use AI systems in their daily work. Findings indicate that the AI-DEC effectively supported workers in designing explanations that accommodated diverse levels of performance and autonomy needs, which varied depending on the AI system's workplace role and worker values. We discuss the implications of using the AI-DEC for the user-centered design of AI explanations in real-world systems.


Graphic Design with Large Multimodal Model

Cheng, Yutao, Zhang, Zhao, Yang, Maoke, Nie, Hui, Li, Chunyuan, Wu, Xinglong, Shao, Jie

arXiv.org Artificial Intelligence

In the field of graphic design, automating the integration of design elements into a cohesive multi-layered artwork not only boosts productivity but also paves the way for the democratization of graphic design. One existing practice is Graphic Layout Generation (GLG), which aims to lay out sequential design elements. It has been constrained by the necessity for a predefined correct sequence of layers, thus limiting creative potential and increasing user workload. In this paper, we present Hierarchical Layout Generation (HLG) as a more flexible and pragmatic setup, which creates graphic compositions from unordered sets of design elements. To tackle the HLG task, we introduce Graphist, the first layout generation model based on large multimodal models. Graphist efficiently reframes HLG as a sequence generation problem, taking RGB-A images as input and outputting a JSON draft protocol that indicates the coordinates, size, and order of each element. We develop multiple evaluation metrics for HLG. Graphist outperforms prior art and establishes a strong baseline for this field.
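The JSON draft protocol mentioned in this abstract can be pictured as a list of per-element records. The field names below are illustrative assumptions, not the paper's exact schema; the sketch only shows the general shape of such a protocol and how stacking order recovers the composition.

```python
import json

# Hypothetical draft protocol: one record per design element, giving its
# position, size, and stacking order. Field names are illustrative.
layout = [
    {"order": 0, "x": 0,   "y": 0,   "w": 1024, "h": 768},  # background image
    {"order": 2, "x": 140, "y": 420, "w": 360,  "h": 90},   # headline text
    {"order": 1, "x": 120, "y": 80,  "w": 400,  "h": 300},  # product photo
]

# Elements are composited bottom-up, so sorting by stacking order
# recovers the paint order of the final multi-layered artwork.
paint_order = sorted(layout, key=lambda e: e["order"])
print(json.dumps(paint_order, indent=2))
```

Representing the layout as plain JSON is what lets a sequence model like Graphist emit it token by token while a downstream renderer consumes it directly.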


Layout2Rendering: AI-aided Greenspace design

Chen, Ran, Lian, Zeke, He, Yueheng, Ling, Xiao, Yang, Fuyu, Yao, Xueqi, Yi, Xingjian, Zhao, Jing

arXiv.org Artificial Intelligence

In traditional human living environment landscape design, the establishment of three-dimensional models is an essential step for designers to intuitively present the spatial relationships of design elements, as well as a foundation for conducting landscape analysis on the site. Rapidly and effectively generating beautiful and realistic landscape spaces is a significant challenge faced by designers. Although generative design has been widely applied in related fields, existing approaches mostly generate three-dimensional models through the restriction of indicator parameters. However, the elements of landscape design are complex and have unique requirements, making it difficult to generate designs purely from indicator limitations. To address these issues, this study proposes a park space generative design system based on deep learning technology. This system generates design plans based on the topological relationships of landscape elements, then vectorizes the plan element information, and uses Grasshopper to generate three-dimensional models while synchronously fine-tuning parameters, rapidly completing the entire process from basic site conditions to model effect analysis. Experimental results show that: (1) the system, with the aid of AI-assisted technology, can rapidly generate green space schemes that match the designer's intent based on site conditions; (2) this study has vectorized and three-dimensionalized various types of landscape design elements based on semantic information; (3) the analysis and visualization module constructed in this study can perform landscape analysis on the generated three-dimensional models and produce node effect diagrams, allowing users to modify the design in real time based on the effects, thus enhancing the system's interactivity.


Language-based Photo Color Adjustment for Graphic Designs

Wang, Zhenwei, Zhao, Nanxuan, Hancke, Gerhard, Lau, Rynson W. H.

arXiv.org Artificial Intelligence

Adjusting the photo color to associate with some design elements is an essential way for a graphic design to effectively deliver its message and make it aesthetically pleasing. However, existing tools and previous works face a dilemma between ease of use and level of expressiveness. To this end, we introduce an interactive language-based approach for photo recoloring, which provides an intuitive system that can assist both experts and novices in graphic design. Given a graphic design containing a photo that needs to be recolored, our model can predict the source colors and the target regions, and then recolor the target regions with the source colors based on the given language-based instruction. The multi-granularity of the instruction allows diverse user intentions. The proposed novel task faces several unique challenges, including: 1) color accuracy, recoloring with exactly the same color from the target design element as specified by the user; 2) multi-granularity instructions, parsing instructions correctly to generate a specific result or multiple plausible ones; and 3) locality, recoloring in semantically meaningful local regions to preserve original image semantics. To address these challenges, we propose a model called LangRecol with two main components: the language-based source color prediction module and the semantic-palette-based photo recoloring module. We also introduce an approach for generating a synthetic graphic design dataset with instructions to enable model training. We evaluate our model via extensive experiments and user studies. We also discuss several practical applications, showing the effectiveness and practicality of our approach. Code and data for this paper are at: https://zhenwwang.github.io/langrecol.


Towards Dialogue Systems with Agency in Human-AI Collaboration Tasks

Sharma, Ashish, Rao, Sudha, Brockett, Chris, Malhotra, Akanksha, Jojic, Nebojsa, Dolan, Bill

arXiv.org Artificial Intelligence

Agency, the capacity to proactively shape events, is crucial to how humans interact and collaborate with other humans. In this paper, we investigate Agency as a potentially desirable function of dialogue agents, and how it can be measured and controlled. We build upon the social-cognitive theory of Bandura (2001) to develop a framework of features through which Agency is expressed in dialogue -- indicating what you intend to do (Intentionality), motivating your intentions (Motivation), having self-belief in intentions (Self-Efficacy), and being able to self-adjust (Self-Regulation). We collect and release a new dataset of 83 human-human collaborative interior design conversations containing 908 conversational snippets annotated for Agency features. Using this dataset, we explore methods for measuring and controlling Agency in dialogue systems. Automatic and human evaluation show that although a baseline GPT-3 model can express Intentionality, models that explicitly manifest features associated with high Motivation, Self-Efficacy, and Self-Regulation are better perceived as being highly agentive. This work has implications for the development of dialogue systems with varying degrees of Agency in collaborative tasks.


AI-Powered Graphic Design: Exploring the Intersection of Creativity and Technology

#artificialintelligence

Welcome to the exciting world of graphic design, where creativity no longer has limits. AI has been rapidly evolving in recent years, and it has already started to revolutionise the way we approach design. By automating routine tasks, it can save designers a lot of time and effort, allowing them to focus on the more creative aspects of their work. One of the most significant advantages of using AI in graphic design is that it can help designers generate new and innovative ideas. AI-powered tools can analyze vast amounts of data, identify patterns, and offer suggestions that human designers might not have thought of otherwise.